
Release v0.3.24#76

Merged
bradhe merged 13 commits intomainfrom
develop
Jul 31, 2025
Conversation


@bradhe bradhe commented Jul 31, 2025

  • Updated local/remote support for llms calls.

datancoffee and others added 12 commits July 27, 2025 17:59
…tion

The goal of this PR is to allow developers to use LLMs across their development and production environments just by specifying a short model family name, without needing lots of conditional statements to handle differences between environments.

E.g. calling llms("llama3.2") should obtain a reference to the "llama3.2:3b" model when working with ollama local inference and a reference to "meta-llama/Llama-3.2-3B-Instruct" when using Hugging Face Hub serverless inference.

Differences in model naming, and the fact that each model is actually a collection of dozens of variants that differ in parameter count (distillation) and quantization technique, make name resolution a tricky task.
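To illustrate the naming problem, here is a minimal sketch; the alias table below is hypothetical and only illustrative of the mismatch, not Tower's actual mapping:

```python
# Hypothetical alias table (illustrative only): the same model family has
# different canonical names under Ollama and on the Hugging Face Hub.
FAMILY_ALIASES = {
    "llama3.2": {
        "ollama": ["llama3.2:1b", "llama3.2:3b", "llama3.2:latest"],
        "hugging_face_hub": [
            "meta-llama/Llama-3.2-1B-Instruct",
            "meta-llama/Llama-3.2-3B-Instruct",
        ],
    },
}

def family_of(name: str) -> str:
    """Strip an Ollama-style ':tag' suffix to recover the family name."""
    return name.split(":")[0]
```

Resolution then amounts to mapping whatever the user typed back to a family, and picking a concrete variant appropriate to the current environment.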

Here are the new capabilities of the llms() API:

1. Tower now recognizes ~170 names of model families as of August 2025.

2. Users can specify a model family e.g. llms("deepseek-r1") in both local and Tower cloud environments, and Tower will resolve the model family to a particular model that is available for inference:
- in the local environment, Tower will find the model that is installed and, if multiple are installed, pick the one with the largest number of parameters
- in Tower cloud environments, Tower will take the first model returned by HF search, making sure that this model is servable by the Inference Service if one is specified by the user

In addition to using model family names, users can also specify a particular model in both local and Tower cloud environments:
locally: llms("deepseek-r1:14b") or llms("llama3.2:latest")
serverlessly: llms("deepseek-ai/DeepSeek-R1-0528") or llms("meta-llama/Llama-3.2-3B-Instruct")

Expected use:

A developer wants to use a model of the "llama3.2" family first in development and then in production. They would add this code to their Tower app:

```python
import os

model_name = os.getenv("MODEL_NAME")
llm = llms(model_name)
```

They would set up their environments as follows:

"dev" environment:

Tower Secrets:

MODEL_NAME = "llama3.2"
(any model of this family installed locally will do)

TOWER_INFERENCE_ROUTER=ollama

"prod" environment:

Tower Secrets:

MODEL_NAME = "meta-llama/Llama-3.2-3B-Instruct"
(use a particular model)

TOWER_INFERENCE_ROUTER=hugging_face_hub
TOWER_INFERENCE_ROUTER_API_KEY=hf_123456789
TOWER_INFERENCE_SERVICE=<together|...>
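Putting the two environments together, the dispatch on TOWER_INFERENCE_ROUTER might look roughly like this. The per-router resolvers below are placeholders, not the real lookups (which query Ollama or HF Hub search):

```python
import os

def resolve_model(requested: str, router: str) -> str:
    """Dispatch model-name resolution based on the inference router.
    The resolver lambdas are illustrative placeholders only."""
    resolvers = {
        "ollama": lambda name: f"{name}:latest",         # stand-in for local Ollama lookup
        "hugging_face_hub": lambda name: f"org/{name}",  # stand-in for HF Hub search
    }
    if router not in resolvers:
        raise ValueError(f"Inference router {router} not supported.")
    return resolvers[router](requested)

# The router comes from the environment's Tower Secrets:
router = os.getenv("TOWER_INFERENCE_ROUTER", "ollama")
```

The same app code then resolves "llama3.2" to a local Ollama model in "dev" and to a Hugging Face Hub model in "prod", with no branching in the app itself.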
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…that-retrieves-best' of https://github.com/tower/tower-cli into feature/tow-501-sdk-implement-experimental-llm-feature-that-retrieves-best
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
1. added HF Hub and Ollama as dependencies to pyproject.toml (not a brad comment)
2. bumped HF version to 0.34.3
3. updated pytest.ini.template with settings to find source code
4. in tower context and elsewhere, renamed inference_service to inference_provider
5. in llms, addressed a bunch of comments from brad
6. removed test_env.py
7. addressed lots of brad's comments re test_llms.py
8. updated uv.lock
…imental-llm-feature-that-retrieves-best

Initial check-in of changes to llms() API improving model name resolution
@bradhe bradhe changed the title Production deployment Release v0.3.24 Jul 31, 2025

Copilot AI left a comment


Pull Request Overview

This PR updates the LLM functionality to support both local and remote inference, replacing the previous hardcoded model mappings with a more flexible model resolution system that works with Ollama locally and Hugging Face Hub remotely.

Key Changes:

  • Refactored the LLM system to dynamically resolve model names instead of using hardcoded mappings
  • Added comprehensive test coverage for both local (Ollama) and remote (Hugging Face Hub) inference scenarios
  • Updated configuration to use generic inference router/provider terminology instead of Hugging Face-specific naming

Reviewed Changes

Copilot reviewed 10 out of 13 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
src/tower/_llms.py Complete rewrite of LLM functionality with dynamic model resolution, expanded model family support, and unified inference interface
src/tower/_context.py Updated context configuration to use generic inference terminology instead of Hugging Face-specific variables
tests/tower/test_llms.py New comprehensive test suite covering model resolution and inference for both local and remote scenarios
tests/tower/test_tables.py Fixed import path from tower.pyiceberg to pyiceberg
pyproject.toml Updated dependencies to support new LLM functionality
pytest.ini.template Added pytest configuration template
README.md Updated testing documentation
.vscode/settings.json Added VS Code pytest configuration
.github/workflows/build-binaries.yml Added python3-dev package for building dependencies
Comments suppressed due to low confidence (1)

src/tower/_llms.py:6

  • [nitpick] The import alias ollama_list_models is misleading since list is the actual function name. Consider using a more descriptive alias like ollama_list or importing it directly as list and using qualified calls like ollama.list().
from ollama import list as ollama_list_models

model_family = model_name.split(':')[0]
size = model.get('size', 0)
details = model.get('details', {})
parameter_size=details.get('parameter_size', '')

Copilot AI Jul 31, 2025


Missing space after '=' in parameter assignment. Should be parameter_size = details.get('parameter_size', '') for consistency with other assignments in the same block.

Suggested change
parameter_size=details.get('parameter_size', '')
parameter_size = details.get('parameter_size', '')

Contributor Author

Will fix in a future release.

size = model.get('size', 0)
details = model.get('details', {})
parameter_size=details.get('parameter_size', '')
quantization_level=details.get('quantization_level', '')

Copilot AI Jul 31, 2025


Missing space after '=' in parameter assignment. Should be quantization_level = details.get('quantization_level', '') for consistency with other assignments in the same block.

Suggested change
quantization_level=details.get('quantization_level', '')
quantization_level = details.get('quantization_level', '')

Contributor Author

Will fix in a future release.

raise ValueError(f"Inference router {ctx.inference_router} not supported.")

if ctx.inference_router == "ollama":
return resolve_ollama_model_name(ctx,requested_model)

Copilot AI Jul 31, 2025


Missing space after comma in function call. Should be resolve_ollama_model_name(ctx, requested_model) for consistency with coding standards.

Suggested change
return resolve_ollama_model_name(ctx,requested_model)
return resolve_ollama_model_name(ctx, requested_model)

Contributor Author

Will fix in a future release.

if ctx.inference_router == "ollama":
return resolve_ollama_model_name(ctx,requested_model)
elif ctx.inference_router == "hugging_face_hub":
return resolve_hugging_face_hub_model_name(ctx,requested_model)

Copilot AI Jul 31, 2025


Missing space after comma in function call. Should be resolve_hugging_face_hub_model_name(ctx, requested_model) for consistency with coding standards.

Suggested change
return resolve_hugging_face_hub_model_name(ctx,requested_model)
return resolve_hugging_face_hub_model_name(ctx, requested_model)

Contributor Author

Will fix in a future release.

- quantization_level: quantization level if specified
"""
try:
models = ollama_list_models()

Copilot AI Jul 31, 2025


The function ollama_list_models is imported as an alias for list from ollama. Verify that this call matches the actual signature of ollama.list(); a mismatch would cause a runtime error.

Contributor Author

Will fix in a future release.

@bradhe bradhe merged commit bc0c8ee into main Jul 31, 2025
21 checks passed
3 participants